CN102739550B - Multi-memory pipelined routing architecture based on random duplicate allocation


Info

Publication number
CN102739550B
Authority
CN
Grant status
Grant
Application number
CN 201210248200
Other languages
Chinese (zh)
Other versions
CN102739550A (en)
Inventor
吴裔 (Wu Yi)
农革 (Nong Ge)
Original Assignee
中山大学 (Sun Yat-sen University)

Abstract

The invention discloses a multi-memory pipelined routing architecture based on random duplicate allocation. It comprises a routing buffer that caches arriving packets; a packet filter that prevents packets with identical destination addresses from appearing in the routing buffer; a scheduling unit that controls read/write access between the queued packets and the memories; and an index-table storage unit together with a bank of memories. The routing-table prefix tree is split into one main tree and multiple subtrees: the main tree contains the low-level nodes of the prefix tree and is converted into an index table stored in the index-table storage unit, while each subtree contains high-level nodes of the prefix tree. Every subtree has two copies, and each copy is mapped onto the memory bank independently. Each arriving packet generates a routing-information lookup request consisting of one or more memory accesses; the target of each access is the memory holding a particular node of the prefix tree, and the target nodes in sequence form a search path that starts at the root node and ends at a leaf node.

Description

Multi-memory pipelined routing architecture based on random duplicate allocation

TECHNICAL FIELD

[0001] The present invention relates to a routing architecture, and in particular to a multi-memory pipelined routing architecture based on random duplicate allocation, which uses a random-duplicate-allocation technique and multiple memories to pipeline and parallelize the routing-information lookups of multiple packets, thereby effectively increasing the throughput of the routing system.

BACKGROUND

[0002] An IP router implements two basic functions: routing and switching. When a packet arrives at an input port of the router, it passes first through the routing stage and then through the switching stage. In the routing stage, the system obtains the packet's destination address and looks it up in the routing table to obtain the corresponding destination output port number; in the switching stage, the system schedules the packet to the designated output port and thereby forwards it to the next-hop station. Current prefix-tree-based routing algorithms include Binary Trie (BT), Prefix Trie (PT) [1], Fixed-Stride Trie (FST) [2], and Multi-Prefix Trie (MPT) [3].

[0003] The following terms are used in this description.

[0004] Destination address: under the IPv4/IPv6 Internet protocols, the destination address of a packet can be represented as a binary string of length 32/128.

[0005] Binary string: a binary string of length n is an array S[0..n-1] formed by arranging n characters from the alphabet Σ = {0, 1} in order of position.

[0006] Binary prefix substring: the prefix substring S[0..i] of a binary string S is the segment of S from position 0 to position i, i.e., the string formed by S[0], S[1], ..., S[i].

[0007] Routing table: a routing table consists of multiple records. Each record consists of a prefix entry and an output port number; a prefix entry is a binary string, and the prefix entries of all records form a set of binary strings Π.

[0008] Longest matching prefix: for any two binary strings, if S1 is a prefix substring of S2, we say S1 is a matching prefix of S2 of length m, written S1 ⊆ S2. Given a set of binary strings Π and a string S[0..n-1], if S'[0..i] ∈ Π and S' ⊆ S, then S' is a matching prefix of S. If, moreover, every matching prefix S''[0..j] ∈ Π of S satisfies j ≤ i, then S' is called the longest matching prefix of S with respect to Π.
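As an illustration, the definition above can be captured in a few lines of Python (a sketch of ours, not part of the patent):

```python
def longest_matching_prefix(prefix_set, address):
    """Return the longest string in prefix_set that is a prefix of
    address, or None when no element of the set matches."""
    best = None
    for p in prefix_set:
        if address.startswith(p) and (best is None or len(p) > len(best)):
            best = p
    return best
```

For example, with the prefix set {"10", "1010"} and the address 10101010 used later in the text, the longest matching prefix is 1010.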

[0009] Prefix-tree-based routing algorithms: in the routing stage, the system searches the routing table for the longest prefix matching the packet's destination address. Naively, the system could compare the destination address against the prefix entry of each record in turn to find the longest matching prefix; obviously, such a search cannot achieve high lookup efficiency. Instead, the strategy of converting the routing table into a prefix tree to support fast route lookup has been studied extensively and adopted in industry. Without loss of generality, in a prefix tree a single prefix node may represent one or more records of the routing table, and the prefix nodes are connected by pointers. The route lookup for a packet starts at the root of the prefix tree and ends when it reaches a leaf node. Along the way, the system matches the packet's destination address against the prefixes in all nodes on the search path and records the longest matching prefix obtained so far. Finally, the system returns the output port number corresponding to that longest matching prefix, and the packet proceeds to the switching stage.

[0010] Using the above terminology, we give an example of prefix-tree-based route lookup. For ease of description, we use the BT routing algorithm to organize the prefixes in the routing table, as shown in Fig. 1 and Fig. 2:

[0011] According to the routing table in Fig. 1, Fig. 2 shows the prefix tree constructed by the BT routing algorithm; each prefix node of this tree holds at most one prefix. Suppose the high 8 bits of the destination address of a packet arriving at the router are 10101010. The route lookup of this packet proceeds as follows: the packet visits the root of the prefix tree; the root's prefix is empty, so the match fails, and the search branches on bit 0 of the destination address. Bit 0 is 1, so the packet visits the root's right child; that node's prefix is empty, so the match fails, and the search branches on bit 1. Bit 1 is 0, so the packet visits the current node's left child; that node's prefix is 10*, the match succeeds, the longest matching prefix is updated to 10*, and the search branches on bit 2. Repeating these steps, the routing-table prefixes matching the destination address are found to be 10* and 1010*; by the definition of the longest matching prefix, the system returns R7, the output port number corresponding to 1010*.
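The walkthrough above can be sketched as a binary trie in Python (our own sketch; only the prefixes 10* and 1010* and the port R7 come from the text, and the other port name below is a made-up placeholder since Fig. 1 is not reproduced here):

```python
class TrieNode:
    def __init__(self):
        self.port = None   # output port if a routing-table prefix ends here
        self.child = {}    # bit character '0'/'1' -> child TrieNode

def insert(root, prefix, port):
    """Add a routing-table record (prefix, output port) to the BT trie."""
    node = root
    for bit in prefix:
        node = node.child.setdefault(bit, TrieNode())
    node.port = port

def lookup(root, dest):
    """Walk the trie along the destination bits, remembering the port of
    the longest prefix matched so far, as in paragraph [0011]."""
    node, best = root, None
    while node is not None and dest:
        if node.port is not None:
            best = node.port
        node = node.child.get(dest[0])
        dest = dest[1:]
    if node is not None and node.port is not None:
        best = node.port
    return best
```

With the records 10* -> R2 (hypothetical port) and 1010* -> R7 inserted, looking up 10101010 matches both prefixes and returns R7, the port of the longer one.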

[0012] Many prefix-tree-based routing algorithms exist; see [1-15]. They can be divided into serial and parallel prefix-tree routing algorithms. In a serial algorithm, the route lookups of the packets are executed strictly one after another, so the system's processing speed is slow and its throughput low. Consequently, much research has been devoted to designing efficient multi-memory pipelined routing architectures which, by judiciously scheduling and arranging accesses to the memory resources, execute multiple route lookups in parallel and thereby greatly improve the overall performance of the system [12-15].

[0013] References:

[0014] [1] M. Berger, "IP lookup with low memory requirement and fast update," in Proc. IEEE HPSR, Jun 2003, pp. 287-291.

[0015] [2] S. Sahni and K. Kim, "Efficient construction of multibit tries for IP lookup," IEEE/ACM Trans. on Networking, pp. 650-662, Aug 2003.

[0016] [3] S. Hsieh, Y. Huang and Y. Yang, "A novel dynamic router-tables design for IP lookup and update," in IEEE Int. Conf. on Future Information Technology, May 2010, pp. 1-6.

[0017] [4] V. Srinivasan and G. Varghese, "Faster IP lookups using controlled prefix expansion," ACM Trans. on Computer Systems, pp. 1-40, Feb 1999.

[0018] [5] S. Nilsson and G. Karlsson, "IP-address lookup using LC-tries," IEEE J. on Selected Areas in Communications, pp. 1083-1092, Jun 1999.

[0019] [6] M. Degermark, A. Brodnik, S. Carlsson, and S. Pink, "Small forwarding tables for fast routing lookups," in Proc. ACM SIGCOMM, 1997, pp. 3-14.

[0020] [7] V. Ravikumar, R. Mahapatra, and J. Liu, "Modified LC-trie based efficient routing lookup," in Proc. IEEE Int. Symp. on MASCOTS, Jan 2003, pp. 177-182.

[0021] [8] S. Sahni and K. Kim, "Efficient construction of fixed-stride multibit tries for IP lookup," in IEEE Workshop on Future Trends of Distributed Computing Systems, Nov 2001, pp. 178-184.

[0022] [9] --, "Efficient construction of variable-stride multibit tries for IP lookup," in Proc. Symp. on Applications and the Internet, Feb 2002, pp. 220-227.

[0023] [10] S. Sahni and H. Lu, "Dynamic tree bitmap for IP lookup and update," in IEEE Int. Conf. on Networking, April 2007, pp. 79-84.

[0024] [11] Y. Chang, Y. Lin and C. Su, "Dynamic multiway segment tree for IP lookups and the fast pipelined search engine," IEEE Trans. on Computers, pp. 492-506, Feb 2010.

[0025] [12] K. Kim and S. Sahni, "Efficient construction of pipelined multibit-trie router-tables," IEEE Trans. on Computers, pp. 32-43, Jan 2007.

[0026] [13] M. Bando and H. J. Chao, "FlashTrie: hash-based prefix compressed trie for IP route lookup beyond 100Gbps," in Proc. IEEE INFOCOM, March 2010, pp. 1-9.

[0027] [14] W. Jiang and V. Prasanna, "A memory-balanced linear pipeline architecture for trie-based IP lookup," in IEEE Symp. on High-Performance Interconnects, Aug 2007, pp. 83-90.

[0028] [15] S. Kumar, M. Becchi, P. Crowley, and J. Turner, "CAMP: fast and efficient IP lookup architecture," in ACM/IEEE Symp. on Architecture for Networking and Communications Systems, 2006, pp. 51-60.

[0029] [16] T. Anderson, S. Owicki, J. Saxe, and C. Thacker, "High-speed switch scheduling for local area networks," ACM Trans. Comput. Syst., pp. 319-352, Nov 1993.

[0030] [17] N. McKeown, "The iSLIP scheduling algorithm for input-queued switches," IEEE/ACM Trans. on Networking, pp. 188-201, April 1999.

[0031] [18] P. Sanders, S. Egner, and J. Korst, "Fast concurrent access to parallel disks," in Proc. 11th Annual ACM-SIAM Symposium on Discrete Algorithms, 2000, pp. 849-858.

[0032] [19] J. Vitter and E. Shriver, "Algorithms for parallel memory I: two-level memories," pp. 110-147, 1994.

[0033] [20] R. Barve, E. Grove, and J. Vitter, "Simple randomized mergesort on parallel disks," Parallel Comput., pp. 601-631, 1997.

SUMMARY OF THE INVENTION

[0034] To address the above shortcomings, the present invention provides a multi-memory pipelined routing architecture based on random duplicate allocation, which uses a random-duplicate-allocation technique and multiple memories to pipeline and parallelize the routing-information lookups of multiple packets and thereby effectively increase the throughput of the routing system. It comprises a routing buffer for caching packets, a scheduling unit for controlling read/write access between the queued packets and the memories, and a bank of memories for storing the nodes of the routing-table prefix tree; the prefix-tree nodes are mapped onto the memories by random duplicate allocation. It further comprises an index-table storage unit. The routing-table prefix tree is split into one main tree and multiple subtrees: the main tree contains the low-level nodes of the prefix tree and is converted into an index table, which is stored in the index-table storage unit, while each subtree contains high-level nodes of the prefix tree. Starting from its root node, each subtree is mapped level by level, cyclically, onto the memory bank. Every subtree has two copies, and each copy is mapped onto the memory bank independently. Each packet newly arriving at the routing architecture passes through a single-step matching stage, which consults the index table, and a lookup stage involving multiple memory accesses, to obtain the longest matching prefix of its destination address.

[0035] The index table contains a number of entries; each entry consists of a prefix field and one or more node-pointer fields.

[0036] The architecture further comprises a packet filter that examines each packet arriving at the routing architecture and prevents packets with identical destination addresses from appearing in the routing buffer.

[0037] The queuing policy for packets is:

[0038] 1) System time is divided into fixed-size time slots; each slot is defined as one memory-access cycle, and each slot is subdivided into M equal clock cycles;

[0039] 2) Every packet arriving at the routing buffer has a fixed length, and at most one packet arrives per clock cycle, i.e., at most M packets arrive at the routing architecture per slot;

[0040] 3) At the end of each slot, the scheduling unit selects a conflict-free set of packets from those queued in the routing buffer; all selected packets access the memory bank in parallel to look up the routing information stored in the memories, and any selected packet that completes its routing-information lookup leaves the routing buffer immediately;

[0041] 4) Each memory can be accessed at most once within a slot.
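Rules 3) and 4) can be illustrated with a minimal Python sketch (our illustration; the greedy selection and the data layout are assumptions, since the patent only requires that the selected set be conflict-free):

```python
def select_conflict_free(queue):
    """Greedily pick queued packets whose root memories are pairwise
    distinct, honouring rule 4): each memory is accessed at most once
    per slot.  Each queue entry is (packet_id, candidate_root_memories),
    with two candidates per packet under random duplicate allocation."""
    busy = set()                       # memories already claimed this slot
    selected = []
    for pkt, roots in queue:
        free = [m for m in roots if m not in busy]
        if free:
            busy.add(free[0])
            selected.append((pkt, free[0]))
    return selected
```

With three packets whose duplicate subtree copies are rooted in memories (0, 2), (0, 1) and (0, 2), all three can be served in one slot even though every packet shares root memory 0, which hints at why the duplicate copies reduce conflicts.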

[0042] Each arriving packet generates a routing-information lookup request comprising one or more memory accesses; the target of each access is the memory holding a particular node of the routing-table prefix tree, and the target nodes in sequence form a search path starting at the root node and ending at a leaf node.

[0043] The scheduling policy of the scheduling unit is:

[0044] Step 1) Request: every queued packet that has not yet been matched sends a request to each of its root memories with which it has not yet been matched;

[0045] Step 2) Grant: each requested memory selects one packet from all packets requesting it and grants it access permission;

[0046] Step 3) Accept: each granted packet selects one memory from all memories granting it and accepts its access permission,

[0047] where a root memory is the memory in which the root node of a subtree resides.
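The three steps resemble one iteration of PIM/iSLIP-style matching [16, 17]. A rough Python sketch (ours; random tie-breaking is an assumption, as the patent does not fix how a memory or a packet chooses among candidates):

```python
import random

def schedule_round(requests):
    """One request-grant-accept round.  `requests` maps each unmatched
    packet to the list of its root memories (two per packet under random
    duplicate allocation).  Returns a partial matching
    packet -> granted-and-accepted memory."""
    # Step 1) Request: every unmatched packet asks all of its root memories.
    asked = {}
    for pkt, mems in requests.items():
        for m in mems:
            asked.setdefault(m, []).append(pkt)
    # Step 2) Grant: each requested memory grants one requesting packet.
    granted = {}
    for m, pkts in asked.items():
        granted.setdefault(random.choice(pkts), []).append(m)
    # Step 3) Accept: each granted packet accepts one granting memory.
    return {pkt: random.choice(mems) for pkt, mems in granted.items()}
```

Because each memory grants at most one packet and each packet accepts at most one memory, the resulting matching never assigns the same memory twice, which is exactly the conflict-freedom required by the queuing policy.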

[0048] Advantageous effects of the invention: first, the routing architecture of the invention provides a routing buffer to absorb arriving packets, supporting the ever-growing bandwidth demands of backbone networks; if the packet-loss rate of the routing buffer is lower than that of the switching buffer, the routing architecture achieves the same performance as a routing system without a routing buffer. Second, through the packet filter the invention not only avoids routing redundancy (any two packets with the same destination address would be routed to the same output port) but also keeps packets with the same destination address in first-in-first-out order. Third, through random duplicate allocation the invention trades additional storage space for a lower frequency of memory-access conflicts in the pipelined routing architecture, which makes it highly practical.

BRIEF DESCRIPTION OF THE DRAWINGS

[0049] Fig. 1 is an example routing table;

[0050] Fig. 2 is a schematic diagram of the BT tree corresponding to the routing table;

[0051] Fig. 3 is a schematic diagram of the routing architecture of the invention;

[0052] Fig. 4 is a schematic diagram of the packet queuing policy of the invention;

[0053] Fig. 5 is a schematic diagram of the bipartite-graph matching model;

[0054] Fig. 6 is a schematic diagram of the initial state and the arriving packets of the invention;

[0055] Fig. 7 is a schematic diagram of the result of pipelined parallel scheduling of the given packets by the invention.

DETAILED DESCRIPTION

[0056] The invention is further described below with reference to the drawings.

[0057] As shown in Fig. 3, the multi-memory pipelined routing architecture based on random duplicate allocation of the present invention comprises a routing buffer, a scheduling unit, and a memory bank. The routing buffer caches packets entering the routing architecture; the scheduling unit controls read/write access between the packets queued in the routing buffer and the memories; the memory bank stores the nodes of the routing-table prefix tree. A design assumption of traditional routers is that the router can provide real-time routing service for packets arriving at any possible rate, so the router need not provide a buffer at the routing stage to cache arriving packets. To increase system throughput, the present routing architecture provides a routing buffer to absorb arriving packets, thereby supporting the growing bandwidth demands of backbone networks; if the packet-loss rate of the routing buffer is lower than that of the switching buffer, the proposed routing architecture achieves the same performance as a routing system without a routing buffer.
In addition, the invention may add a packet filter, which examines every packet arriving at the routing architecture and prevents packets with identical destination addresses from appearing in the routing buffer. The packet filter serves two purposes: first, it avoids routing redundancy, since any two packets with the same destination address would be routed to the same output port; second, it keeps packets with the same destination address in first-in-first-out order.

[0058] In addition, the invention uses a storage-allocation technique called random duplicate allocation (RDA) to map the routing-table prefix-tree nodes onto the memories. This technique can map the prefix-tree nodes of a routing table under any prefix-tree-based routing algorithm uniformly across all memories of the pipelined system. Compared with other existing pipelined routing designs, RDA uses additional storage space to reduce the frequency of memory-access conflicts in the pipelined system; the technique was first used for access management of parallel-disk data [18-20]. In detail, any routing-table prefix tree is split into one main tree and multiple subtrees: the main tree contains the low-level nodes of the prefix tree and is converted into an index table, while each subtree contains part of the high-level nodes of the prefix tree and is mapped onto the memory bank. Further, every subtree has two copies, and each copy is mapped onto the memory bank independently. Consequently, a packet's route lookup can proceed as long as it can access either of the two copy trees. Our use of RDA is motivated by the following consideration: when building a high-performance routing system, increasing the capacity of a single memory is simpler, cheaper, and more feasible than increasing the number of memories.
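The allocation step can be sketched as follows (our Python illustration; uniform, independent placement of the two root copies is our reading of "independently mapped", and the memory count below is arbitrary):

```python
import random

def rda_allocate(subtrees, num_memories):
    """Random duplicate allocation: place two copies of every subtree,
    each copy rooted in an independently, uniformly chosen memory
    (the two copies may occasionally land in the same memory)."""
    return {t: (random.randrange(num_memories),
                random.randrange(num_memories))
            for t in subtrees}
```

During lookup, a packet may start from either of the two root memories of its subtree, which is what gives the scheduler its freedom to avoid conflicts.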

[0059] For convenience of description, we first define several terms. When we speak of a packet's subtree, we mean the target subtree the system must traverse to obtain the packet's routing information. For each subtree, the memory holding its root node is called the root memory of that subtree; further, for a given packet, the root memory of its corresponding subtree may also be regarded as the root memory of that packet. Since the RDA storage-allocation technique places two independent copies of each subtree in the memory bank, every packet has two subtrees and two root memories.

[0060] As shown in Fig. 3, the invention adds an index-table storage unit for storing the index table; this unit is typically a small, fast memory (e.g., SRAM), i.e., the main tree of the routing-table prefix tree is represented as an index table stored in a small, fast memory. Starting from its root node, each subtree is mapped level by level, cyclically, onto the memory bank: without loss of generality, if the root node is stored in memory number i, then the nodes of level j are stored in memory number (i+j) mod M. Each packet newly arriving at the routing architecture passes through two processing stages to obtain the longest matching prefix of its destination address: a single-step matching stage that consults the index table, and a lookup stage involving multiple memory accesses. The index table contains Z entries, each consisting of a prefix field and one or more node-pointer fields. The route lookup of a packet thus proceeds as follows: first, the system takes the top r bits of the packet's destination address and uses them as the index of entry i of the index table; if the node pointer of entry i is null, the lookup is complete; otherwise, the system starts from the root of the subtree pointed to by that node pointer and traverses the subtree to complete the routing-information lookup request.
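The two stages can be sketched in Python as follows (our illustration; the index-table entry layout, carrying the two copy root memories and the subtree depth, is an assumption made for the sketch):

```python
def node_memory(root_mem, level, num_memories):
    """Memory holding a node at depth `level` of a subtree whose root copy
    sits in memory `root_mem`: levels are mapped cyclically, so level j
    lands in memory (root_mem + j) mod M."""
    return (root_mem + level) % num_memories

def access_sequence(index_table, dest, r, num_memories):
    """Stage 1: single-step match on the top r bits of the destination.
    Stage 2: the entry names the root memories of the two subtree copies
    and the subtree depth; either copy yields a memory-access sequence."""
    entry = index_table.get(dest[:r])
    if entry is None:              # null pointer: the lookup ends here
        return None
    root_a, root_b, depth = entry
    root = root_a                  # the scheduler may pick either copy
    return [node_memory(root, j, num_memories) for j in range(depth)]
```

For instance, with M = 8 memories, a subtree copy rooted in memory 6 and three levels deep is visited through memories 6, 7, 0, one memory per level.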

[0061] To keep the aggregate throughput of the memories high, the scheduling goal of the system is to select from the queued packets a set that is as large as possible in which any two packets have distinct root memories; packets with different root memories can perform their memory accesses simultaneously. For each subtree, its root node is stored in a randomly chosen memory, and its descendant nodes are mapped level by level, cyclically, onto the memories following the root memory. Hence, during a packet's route lookup, the traversal of the target subtree begins with an access to the root memory and ends at the memory where some leaf node resides; along the way, the lookup request visits one node per level and one level per memory, so the accesses to the subtrees of the selected packet set can be executed in parallel in batch mode. Here, the term batch means that the traversals of all selected subtrees start at the same moment, and the batch ends only when all traversals have completed.

[0062] Below, we state the assumptions and parameter declarations used to establish the packet queuing policy of the routing architecture:

[0063] 1)系统时间被分割为固定大小的时隙,每个时隙被定义为一个存储访问周期,进一步地,每个时隙被等分为M个时钟周期。 [0063] 1) The system is divided into fixed size time slots, each time slot is defined as one memory access cycle, and further, each time slot being divided into M clock cycles.

[0064] 2)每个到达路由缓冲区的数据包长度固定,每个时钟周期到达的数据包数量最多为1,换言之,每个时隙最多有M数据包达到路由体系结构。 [0064] 2) the length of the route for each arriving packet the buffer fixed number of packets per clock cycle to reach up to 1, in other words, each slot reaches up to M packets routing architecture.

[0065] 3) At the end of each slot, the scheduling unit selects a conflict-free set of packets from the queue; all selected packets access the memory bank in parallel to look up the routing information stored in the memories. As soon as a selected packet completes its route lookup, it leaves the routing buffer.

[0066] 4) Each memory can be accessed at most once per slot.

[0067] For ease of exposition, we introduce the following two parameters:

[0068] N: the number of packets queued in the routing buffer; the buffer's capacity is L packets.

[0069] M: the number of available memories. We denote the memories by m1, ..., mM, and for ease of exposition we assume that M is an integer power of two.

[0070] Under the above assumptions and parameter definitions, Figure 4 shows the queuing policy of the proposed routing architecture. In this model, every arriving packet generates a route-lookup request consisting of one or more memory-access operations; the target of each access is the memory holding some node of the prefix tree, and the target nodes form, in order, a lookup path that starts at the root node and ends at some leaf node.

[0071] Scheduling policy

[0072] For each newly arrived packet, the system queries the index table with the first r bits of the packet's destination address to locate the root node of its subtree in the memory bank; the system then builds a route-lookup request for the packet, which traverses the target subtree level by level to obtain the routing information. As before, we represent a route-lookup request by a multidimensional tuple of the form R(s1, s2, ..., si, ..., st).
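A minimal sketch of this single-step index-table match follows; the entry fields `replica_roots` and `next_hop`, the value of r, and the concrete addresses are our illustrative assumptions, not structures defined by the patent. The first r address bits select an entry that either resolves the route directly (as happens for P4 in the later example) or points at the two replica roots of the target subtree:

```python
r = 3  # number of leading destination-address bits used as the index

# Each entry either resolves the route in one step (next_hop set) or
# names the (memory, address) locations of the two replica roots.
index_table = {
    0b101: {"next_hop": None, "replica_roots": [(0, 0x10), (2, 0x20)]},
    0b110: {"next_hop": "port7", "replica_roots": []},  # resolved directly
}

def index_lookup(dest_addr: int, addr_bits: int = 32):
    """Return ('done', next_hop) or ('lookup', replica_root_list)."""
    prefix = dest_addr >> (addr_bits - r)  # first r bits of the address
    entry = index_table.get(prefix)
    if entry is None:
        return ("done", None)  # no matching subtree: fall back to default route
    if entry["next_hop"] is not None:
        return ("done", entry["next_hop"])  # fast-match stage suffices
    return ("lookup", entry["replica_roots"])  # must traverse a subtree replica
```

A packet whose prefix hits a directly resolved entry never generates a lookup request, which is why no request R4 is built in the worked example below.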

[0073] A packet's route lookup proceeds in two phases: a scheduling phase and an access phase. In the former, the system selects a set of packets whose root memories are pairwise distinct; in the latter, the system traverses the selected packets' subtrees simultaneously to obtain the routing information. Because the root memories of the packet set obtained in the scheduling phase are pairwise distinct, the corresponding lookup requests can be executed in parallel. In the access phase, each selected packet traverses its target subtree. In detail, and without loss of generality, let Ri and Rj be the lookup requests of any two selected packets, and let mi and mj be their target root memories. During the access phase, Ri cyclically visits the memory sequence mi, m(i+1) mod M, m(i+2) mod M, ... until it reaches some leaf node, and Rj does likewise. Hence Ri and Rj are conflict-free throughout the access phase, and each of their lookup steps can be executed in parallel.

[0074] As shown in Figure 5, the arbitration problem of the scheduling phase can be cast as a bipartite matching problem whose two vertex sets are the N queued packets and the M memories. An edge between a packet and a memory indicates that the root node of the packet's subtree resides in that memory. Note that each subtree has two replica trees, independently distributed over the memory bank, and hence two root nodes residing in different memories; visiting either replica yields the required routing information. Each packet vertex in the graph therefore has two edges. In general, any bipartite matching algorithm can be used to find a maximum matching between the memory bank and the queued packets, including FCFS (First Come First Served), PIM (Parallel Iterative Matching) [16], iSLIP [17], and so on. The scheduling goal is to find a maximum matching between the packets eligible for scheduling and all available memories; an algorithmic framework for this scheduling problem is given below. Note that at the start of each scheduling round all queued packets and all memories are unmatched, and the system iterates the following three steps until a maximum matching is found.

[0075] Step 1) Request: every as-yet-unmatched queued packet sends a request to its as-yet-unmatched root memories;

[0076] Step 2) Grant: every requested memory selects one of the packets requesting it and grants it access permission;

[0077] Step 3) Accept: every granted packet selects one of the memories that granted it and accepts that memory's access permission.
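The three steps above can be sketched as a PIM-style matching loop; this is our own minimal sketch, assuming each packet is described only by the pair of memories holding its two replica roots, and breaking ties by lowest index (i.e. first come, first served):

```python
def match_round(packets, M):
    """packets: list of (mem_a, mem_b) replica-root pairs, one per packet.
    Iterates request/grant/accept until no further pair can be matched.
    Returns {packet_index: memory}."""
    matched_pkt = {}     # packet index -> memory it accepted
    matched_mem = set()  # memories already accepted by some packet
    while True:
        # Step 1) Request: every unmatched packet asks its unmatched roots.
        requests = {}  # memory -> list of requesting packet indices
        for i, roots in enumerate(packets):
            if i in matched_pkt:
                continue
            for mem in roots:
                if mem not in matched_mem:
                    requests.setdefault(mem, []).append(i)
        if not requests:
            break
        # Step 2) Grant: each requested memory grants one requester
        # (lowest packet index, i.e. first come first served).
        grants = {}  # packet index -> list of granting memories
        for mem, reqs in requests.items():
            grants.setdefault(min(reqs), []).append(mem)
        # Step 3) Accept: each granted packet accepts one grant.
        for i, mems in grants.items():
            mem = min(mems)
            matched_pkt[i] = mem
            matched_mem.add(mem)
    return matched_pkt
```

As with PIM and iSLIP, this request-grant-accept iteration converges to a maximal matching, which in unlucky cases can be smaller than the maximum matching the patent's framework aims for; the sketch is meant only to make the three-step round concrete.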

[0078] To aid understanding of the route-lookup process under this scheduling framework, Figures 6 and 7 give a simple scheduling example; note that the scheduling unit uses a first-come-first-served policy to control the access operations between the memory bank and the packet set. Here we assume M = 4 available memories and an initially empty buffer, N = 0. System time is divided into equal slots, each further subdivided into M clock cycles, and at most one packet arrives at the routing buffer per clock cycle. For ease of exposition, we assume that every as-yet-unmatched queued packet randomly selects one of its as-yet-unmatched root memories and sends it an access request.

[0079] In slot 0, packets P0-P3 arrive at the routing buffer, and the system generates route-lookup requests R0-R3 for them; here R0 corresponds to P0, R1 to P1, and so on. Because R2 and R3 request access to m2 simultaneously, a memory-access conflict occurs. Under the first-come-first-served arbitration policy, m2 is granted to the earlier-arriving P2. As a result, the lookup requests of packets {P0, P1, P2} are selected. The request set {R0, R1, R2} is a conflict-free access set that the system can execute in parallel. As the table shows, this batch job consumes consecutive slots 1-3 for its memory-access operations. Clearly, the duration of the access phase is determined by the request that requires the most accesses.

[0080] The system launches the next scheduling round in slot 4, in which the lookup requests {R3, R5, R6, ...} are selected, and the system consumes three consecutive slots, 5-7, to execute the memory accesses of the selected requests. Similarly, in slot 8 and slot 12 the lookup requests {R7, ..., R9} and {R10, ...} are selected, respectively. Note that no lookup request R4 need be built, because P4's routing information was already determined directly in the fast matching stage against the index table.
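The slot accounting of this example can be sketched as follows; the helper `batch_timeline` and the strictly non-overlapped model with one scheduling slot between consecutive batches are our reading of the example, not an interface defined by the patent:

```python
def batch_timeline(batches, start_slot=1):
    """batches: list of lists; each inner list holds the number of
    memory accesses needed by each selected lookup request in one batch.
    Returns one (first_slot, last_slot) pair per batch: a batch's access
    phase lasts as long as its longest request, and one scheduling slot
    separates consecutive batches (the non-overlapped model)."""
    timeline = []
    slot = start_slot
    for batch in batches:
        duration = max(batch)        # access-phase length in slots
        timeline.append((slot, slot + duration - 1))
        slot += duration + 1         # skip the next scheduling slot
    return timeline
```

With a longest-request length of 3 in each of the first two batches, this reproduces the pattern of the example: accesses in slots 1-3, scheduling in slot 4, accesses in slots 5-7, scheduling in slot 8, and so on.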

[0081] From this example we observe that a packet's route lookup comprises two processing stages: a scheduling phase followed immediately by an access phase. The former lasts one slot; the latter lasts one or more slots. In the scheduling phase, the system selects a set of queued packets to perform route lookups, and each selected packet's lookup request then consumes several consecutive slots. We can also see that, for any two consecutively executed lookups, the access phase of the first can be overlapped with the scheduling phase of the second to further improve system efficiency; to simplify the description, however, this example assumes that the lookup processes execute sequentially without overlap.

[0082] The foregoing are merely preferred embodiments of the present invention, and the invention is not limited to them; minor local structural changes may occur in implementation. Any modifications or variations that do not depart from the spirit and scope of the invention, and that fall within the scope of its claims and their technical equivalents, are also intended to be covered by the invention.

Claims (4)

1. A multi-memory pipelined routing architecture based on random replica allocation, comprising: a routing buffer for buffering packets; a scheduling unit for controlling read/write accesses between the queued packets and the memories; and a bank of memories for storing the nodes of a routing-table prefix tree, the prefix-tree nodes being mapped onto the memories under a random-replica-allocation storage scheme; the architecture further comprising an index-table storage unit, wherein the routing-table prefix tree is partitioned into one main tree and several subtrees, the main tree containing the low-level nodes of the prefix tree and being converted into an index table stored in the index-table storage unit, and each subtree containing high-level nodes of the prefix tree; each subtree is mapped into the memory bank level by level, cyclically, starting from its root node; each subtree has two replicas, each replica being mapped into the memory bank independently; every packet newly arriving at the routing architecture undergoes a single-step matching stage against the index table and a lookup stage involving multiple memory accesses, to obtain the longest matching prefix of the packet's destination address; the packet-queuing policy is: 1) system time is divided into fixed-size slots, each slot being defined as one memory-access cycle and being subdivided into M equal clock cycles; 2) every packet arriving at the routing buffer has a fixed length, and at most one packet arrives per clock cycle, i.e. at most M packets arrive at the routing architecture per slot; 3) at the end of each slot, the scheduling unit selects a conflict-free set of packets from those queued in the routing buffer, all selected packets access the memory bank in parallel to look up the routing information stored in the memories, and a selected packet leaves the routing buffer immediately upon completing its route lookup; 4) each memory can be accessed at most once per slot; and the scheduling policy of the scheduling unit is: step 1) request: every as-yet-unmatched queued packet sends a request to its as-yet-unmatched root memories; step 2) grant: every requested memory selects one of the packets requesting it and grants it access permission; step 3) accept: every granted packet selects one of the memories that granted it and accepts that memory's access permission; wherein a root memory denotes the memory in which the root node of a subtree resides.
2. The multi-memory pipelined routing architecture based on random replica allocation according to claim 1, wherein the index table comprises a number of entries, each entry consisting of one prefix field and one or more node-pointer fields.
3. The multi-memory pipelined routing architecture based on random replica allocation according to claim 1, further comprising a packet filter that inspects every packet arriving at the routing architecture, to prevent packets with identical destination addresses from appearing in the routing buffer.
4. The multi-memory pipelined routing architecture based on random replica allocation according to claim 1, wherein every arriving packet generates a route-lookup request comprising one or more memory-access operations, the target of each memory-access operation being the memory in which some node of the routing-table prefix tree resides, the target nodes forming, in order, a lookup path that starts at the root node and ends at some leaf node.
CN 201210248200 2012-07-17 2012-07-17 Multi pipeline routing architecture memory allocated random copy CN102739550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201210248200 CN102739550B (en) 2012-07-17 2012-07-17 Multi pipeline routing architecture memory allocated random copy


Publications (2)

Publication Number Publication Date
CN102739550A true CN102739550A (en) 2012-10-17
CN102739550B true CN102739550B (en) 2015-11-25

Family

ID=46994362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201210248200 CN102739550B (en) 2012-07-17 2012-07-17 Multi pipeline routing architecture memory allocated random copy

Country Status (1)

Country Link
CN (1) CN102739550B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3145134A4 (en) * 2014-06-10 2017-06-28 Huawei Technologies Co., Ltd. Lookup device, lookup configuration method and lookup method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1561047A (en) * 2004-02-20 2005-01-05 清华大学 Distributed paralled IP route searching method based on TCAM
CN101631086A (en) * 2009-08-10 2010-01-20 武汉烽火网络有限责任公司 Routing list partitioning and placing method searched by parallel IP route
CN102484610A (en) * 2010-04-12 2012-05-30 华为技术有限公司 Routing table construction method and device and routing table lookup method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005024659A1 (en) * 2003-08-11 2005-03-17 France Telecom Trie memory device with a circular pipeline mechanism


Also Published As

Publication number Publication date Type
CN102739550A (en) 2012-10-17 application


Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
C14 Grant of patent or utility model